14 research outputs found

    Investigation of the performance of multi-input multi-output detectors based on deep learning in non-Gaussian environments

    The next generation of wireless cellular communication networks must be energy efficient, extremely reliable, and have low latency. This motivates detectors based on deep neural networks (DNNs), which can achieve better bit error rate (BER) or symbol error rate (SER) performance than traditional, computationally complex multi-antenna or multi-input multi-output (MIMO) detectors. This paper examines deep neural networks and deep iterative detectors such as OAMP-Net, built on information-theoretic criteria such as the maximum correntropy criterion (MCC), for implementing MIMO detectors in non-Gaussian environments. The results show that the proposed method achieves better BER and SER performance.
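    As a rough illustration of the criterion named above, the following is a minimal sketch of an MCC-style loss in PyTorch, not the paper's implementation: correntropy replaces the squared error with a Gaussian-kernel similarity, so large impulsive errors (typical of non-Gaussian noise) are down-weighted. The kernel bandwidth `sigma` and the placeholder detector outputs are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MCCLoss(nn.Module):
    """Negative maximum-correntropy criterion: maximizing correntropy between
    the estimate and the target is done by minimizing its negation."""
    def __init__(self, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma  # Gaussian-kernel bandwidth (assumed hyperparameter)

    def forward(self, estimate: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        err = estimate - target
        # Correntropy applies a Gaussian kernel to the error, so outliers from
        # heavy-tailed (non-Gaussian) noise contribute little to the gradient.
        correntropy = torch.exp(-err.pow(2) / (2 * self.sigma ** 2)).mean()
        return 1.0 - correntropy  # bounded in [0, 1], minimized at zero error

# Toy usage: train any DNN-based detector with this loss in place of MSE.
if __name__ == "__main__":
    torch.manual_seed(0)
    x_hat = torch.randn(64, 8, requires_grad=True)  # detector output (placeholder)
    x_true = torch.randn(64, 8)                     # transmitted symbols (placeholder)
    loss = MCCLoss(sigma=1.0)(x_hat, x_true)
    loss.backward()
    print(float(loss))
```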

    Hamiltonian Adaptive Importance Sampling


    Neural Generalization of Multiple Kernel Learning

    Multiple Kernel Learning (MKL) is a conventional way to learn the kernel function in kernel-based methods, and MKL algorithms enhance the performance of kernel methods. However, these methods have lower complexity than deep learning models and fall short of them in recognition accuracy. Deep learning models can learn complex functions by applying nonlinear transformations to data through several layers. In this paper, we show that a typical MKL algorithm can be interpreted as a one-layer neural network with linear activation functions. Building on this interpretation, we propose a Neural Generalization of Multiple Kernel Learning (NGMKL), which extends the conventional MKL framework to a multi-layer neural network with nonlinear activation functions. Our experiments on several benchmarks show that the proposed method increases the complexity of MKL models and leads to higher recognition accuracy.
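    To make the one-layer interpretation concrete, here is a minimal sketch assuming a handful of precomputed base kernels: a single linear unit over the base-kernel values corresponds to the conventional MKL combination, while stacking nonlinear layers on the same values gives an NGMKL-style multi-layer generalization. The specific kernels, layer sizes, and activations below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

def base_kernels(x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
    """Stack a few base-kernel values k_m(x1, x2); kernels chosen for illustration."""
    lin = (x1 * x2).sum(-1)                             # linear kernel
    rbf1 = torch.exp(-((x1 - x2) ** 2).sum(-1) / 2.0)   # RBF kernel, narrow bandwidth
    rbf2 = torch.exp(-((x1 - x2) ** 2).sum(-1) / 8.0)   # RBF kernel, wide bandwidth
    return torch.stack([lin, rbf1, rbf2], dim=-1)       # shape (..., 3)

# One-layer view of conventional MKL: k(x1, x2) = sum_m w_m * k_m(x1, x2),
# i.e. a single linear unit (no activation) over the base-kernel values.
mkl_layer = nn.Linear(3, 1, bias=False)

# NGMKL-style generalization (illustrative): a small multi-layer network with
# nonlinear activations applied to the same base-kernel values.
ngmkl_net = nn.Sequential(
    nn.Linear(3, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)

if __name__ == "__main__":
    torch.manual_seed(0)
    x1, x2 = torch.randn(5, 4), torch.randn(5, 4)
    k = base_kernels(x1, x2)      # (5, 3) base-kernel responses per pair
    print(mkl_layer(k).shape)     # combined kernel under the linear MKL view
    print(ngmkl_net(k).shape)     # learned nonlinear combination (NGMKL view)
```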